Learning Parities in the Mistake-Bound model

Authors

  • Harry Buhrman
  • David García-Soriano
  • Arie Matsliah
Abstract

We study the problem of learning parity functions that depend on at most k variables (k-parities) attribute-efficiently in the mistake-bound model. We design a simple, deterministic, polynomial-time algorithm for learning k-parities with mistake bound O(n^{1−1/k}). This is the first polynomial-time algorithm to learn ω(1)-parities in the mistake-bound model with mistake bound o(n). Using the standard conversion techniques from the mistake-bound model to the PAC model, our algorithm can also be used for learning k-parities in the PAC model. In particular, this implies a slight improvement over the results of Klivans and Servedio [1] for learning k-parities in the PAC model. We also show that the Õ(n)-time algorithm from [1] that PAC-learns k-parities with sample complexity O(k log n) can be extended to the mistake-bound model.
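As a concrete illustration (not taken from the paper), a k-parity is the XOR of a fixed, unknown subset S of k input coordinates; coordinates outside S are ignored. A minimal sketch in Python, with a hypothetical subset S = {0, 2}:

```python
# A k-parity on n variables: XOR of a fixed subset S of k coordinates.
# Hypothetical example: n = 4, S = {0, 2}, i.e. a 2-parity.
def k_parity(bits, S=(0, 2)):
    return sum(bits[i] for i in S) % 2

# flips whenever a coordinate in S flips; ignores the rest
assert k_parity((1, 0, 1, 0)) == 0
assert k_parity((1, 0, 0, 0)) == 1
assert k_parity((1, 1, 1, 0)) == 0  # coordinate 1 is not in S
```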


Similar articles

On learning k-parities with and without noise

We first consider the problem of learning k-parities in the on-line mistake-bound model: given a hidden vector x ∈ {0, 1}^n with |x| = k and a sequence of "questions" a_1, a_2, … ∈ {0, 1}^n, where the algorithm must reply to each question with ⟨a_i, x⟩ (mod 2), what is the best tradeoff between the number of mistakes made by the algorithm and its time complexity? We improve the previous best resu...
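The trade-off above is measured against the classical baseline for this model: Gaussian elimination over GF(2) learns any parity (with no sparsity assumption) while making at most n mistakes, because every mistake yields a constraint that is linearly independent of those already collected. A sketch of that baseline — not the papers' algorithms — with vectors encoded as integer bitmasks:

```python
import random

def parity(a, x):
    """Inner product <a, x> over GF(2); vectors encoded as int bitmasks."""
    return bin(a & x).count("1") & 1

class GF2OnlineLearner:
    """Classical mistake-bound baseline for parities: keep a row-reduced
    GF(2) system of constraints <a, x> = y and predict consistently with
    it. Each mistake adds a linearly independent row, so on n variables
    the learner makes at most n mistakes."""

    def __init__(self):
        self.rows = {}  # pivot bit index -> (row_mask, label_bit)

    def _reduce(self, a):
        # eliminate all known pivot bits from a, tracking the label XOR
        y = 0
        for pivot, (row, lab) in self.rows.items():
            if (a >> pivot) & 1:
                a ^= row
                y ^= lab
        return a, y

    def predict(self, a):
        rem, y = self._reduce(a)
        # if a lies in the span of stored constraints, y is forced;
        # otherwise any consistent guess (here: 0) will do
        return y if rem == 0 else 0

    def learn(self, a, label):
        rem, y = self._reduce(a)
        if rem == 0:
            return  # constraint already implied by stored rows
        pivot = rem.bit_length() - 1
        lab = y ^ label  # <rem, x> = label XOR y
        # keep reduced echelon form: clear the new pivot from old rows
        for p, (row, l) in list(self.rows.items()):
            if (row >> pivot) & 1:
                self.rows[p] = (row ^ rem, l ^ lab)
        self.rows[pivot] = (rem, lab)

# demo: hidden 3-parity on n = 20 variables
random.seed(0)
n, x = 20, (1 << 3) | (1 << 7) | (1 << 15)
learner, mistakes = GF2OnlineLearner(), 0
for _ in range(200):
    a = random.getrandbits(n)
    truth = parity(a, x)
    if learner.predict(a) != truth:
        mistakes += 1
    learner.learn(a, truth)
assert mistakes <= n
```

The O(n^{1−1/k}) bound of the main paper improves on this n-mistake baseline precisely when k is small relative to n.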


Separating Distribution-Free and Mistake-Bound Learning Models over the Boolean Domain

Two of the most commonly used models in computational learning theory are the distribution-free model, in which examples are chosen from a fixed but arbitrary distribution, and the absolute mistake-bound model, in which examples are presented in an arbitrary order. Over the Boolean domain {0, 1}^n, it is known that if the learner is allowed unlimited computational resources then any concept class l...


Counting Rankings

In this paper, I present a recursive algorithm that calculates the number of rankings that are consistent with a set of data (i.e. optimal candidates) in the framework of Optimality Theory. The ability to compute this quantity, which I call the r-volume, makes possible a simple and effective Bayesian heuristic in learning – all else equal, choose the candidate preferred by the highest number of...


On Noise-Tolerant Learning of Sparse Parities and Related Problems

We consider the problem of learning sparse parities in the presence of noise. For learning parities on r out of n variables, we give an algorithm that runs in time poly(log(1/δ), 1/(1−2η)) · n^{(1+(2η)²+o(1))r/2} and uses only ω(1) · r log(n/δ)/(1−2η)² samples in the random noise setting under the uniform distribution, where η is the noise rate and δ is the confidence parameter. From previously known result...


Expected Mistake Bound Model for On-Line Reinforcement Learning

We propose a model of efficient on-line reinforcement learning based on the expected mistake bound framework introduced by Haussler, Littlestone and Warmuth (1987). The measure of performance we use is the expected difference between the total reward received by the learning agent and that received by an agent behaving optimally from the start. We call this expected difference the cumulative mistak...



Journal:
  • Inf. Process. Lett.

Volume 111, Issue —

Pages —

Publication year: 2009